
    Dynamics and topographic organization of recursive self-organizing maps

    Recently there has been an outburst of interest in extending topographic maps of vectorial data to more general data structures, such as sequences or trees. However, there is no general consensus as to how best to process sequences using topographic maps, and this topic remains an active focus of neurocomputational research. The representational capabilities and internal representations of the models are not well understood. Here, we rigorously analyze a generalization of the self-organizing map (SOM) for processing sequential data, recursive SOM (RecSOM) (Voegtlin, 2002), as a nonautonomous dynamical system consisting of a set of fixed input maps. We argue that contractive fixed-input maps are likely to produce Markovian organizations of receptive fields on the RecSOM map. We derive bounds on parameter β (weighting the importance of importing past information when processing sequences) under which contractiveness of the fixed-input maps is guaranteed. Some generalizations of SOM contain a dynamic module responsible for processing temporal contexts as an integral part of the model. We show that Markovian topographic maps of sequential data can be produced using a simple fixed (nonadaptable) dynamic module externally feeding a standard topographic model designed to process static vectorial data of fixed dimensionality (e.g., SOM). However, by allowing trainable feedback connections, one can obtain Markovian maps with superior memory depth and topography preservation. We elaborate on the importance of non-Markovian organizations in topographic maps of sequential data. © 2006 Massachusetts Institute of Technology
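    As a rough, self-contained illustration of the contractivity question analyzed in this work, the sketch below implements the RecSOM fixed-input activation update with randomly initialized placeholder weights and numerically probes how the map's Lipschitz constant grows with β; it does not reproduce the bound derived in the paper, and all sizes and parameter values are invented for the example.

```python
# A minimal numerical sketch (not the paper's derivation): for a fixed input x,
# the RecSOM fixed-input map sends the previous activation profile y(t-1) to
# y(t)_i = exp(-alpha * ||x - w_i||^2 - beta * ||y(t-1) - c_i||^2).
# We estimate its Lipschitz constant empirically to see how contractive it is
# as beta grows. All weights below are random placeholders.

import numpy as np

rng = np.random.default_rng(0)
N, d = 25, 3                  # map units and input dimensionality (arbitrary)
alpha = 1.0
W = rng.normal(size=(N, d))   # input weights
C = rng.normal(size=(N, N))   # context weights

def fixed_input_map(y_prev, x, beta):
    """One RecSOM activation update under a fixed input x."""
    d_in = np.sum((x - W) ** 2, axis=1)
    d_ctx = np.sum((y_prev - C) ** 2, axis=1)
    return np.exp(-alpha * d_in - beta * d_ctx)

def empirical_lipschitz(x, beta, trials=2000):
    """Crude estimate of sup ||F(y1)-F(y2)|| / ||y1-y2|| over random nearby pairs."""
    best = 0.0
    for _ in range(trials):
        y1 = rng.uniform(0, 1, size=N)
        y2 = y1 + 1e-3 * rng.normal(size=N)   # nearby points probe the local slope
        num = np.linalg.norm(fixed_input_map(y1, x, beta) - fixed_input_map(y2, x, beta))
        den = np.linalg.norm(y1 - y2)
        best = max(best, num / den)
    return best

x = rng.normal(size=d)
for beta in (0.1, 0.5, 2.0, 8.0):
    print(f"beta={beta:4.1f}  estimated Lipschitz constant ~ {empirical_lipschitz(x, beta):.3f}")
```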

    Look and Feel What and How Recurrent Self-Organizing Maps Learn

    This paper introduces representations and measurements for revealing the inner self-organization that occurs in a 1D recurrent self-organizing map. Experiments show the remarkable richness and robustness of an extremely simple architecture when it extracts the hidden states of the HMM that feeds it ambiguous and noisy inputs.
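    A minimal sketch of the kind of driving input described above, assuming a toy two-state HMM with overlapping, noisy emissions; this is an illustration, not the paper's experimental setup.

```python
# Generate an ambiguous, noisy observation sequence from a small hidden Markov
# model. A recurrent self-organizing map trained on `observations` alone should
# develop activation patterns over time that correlate with the hidden `states`.

import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1],      # hidden-state transition matrix
              [0.2, 0.8]])
means = np.array([0.0, 0.5])   # overlapping emission means -> ambiguous inputs
noise = 0.3

def sample_hmm(T):
    states, obs = [], []
    s = 0
    for _ in range(T):
        s = rng.choice(2, p=A[s])
        states.append(s)
        obs.append(means[s] + noise * rng.normal())
    return np.array(states), np.array(obs)

states, observations = sample_hmm(500)
print(observations[:10])
```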

    How accurate are the time delay estimates in gravitational lensing?

    We present a novel approach to estimating the time delay between light curves of multiple images in a gravitationally lensed system, based on kernel methods in the context of machine learning. We perform various experiments with artificially generated, irregularly sampled data sets to study the effect of various levels of noise and the presence of gaps of various sizes in the monitoring data. We compare the performance of our method with various other popular methods of estimating the time delay and conclude, from experiments with artificial data, that our method is least vulnerable to missing data and irregular sampling, within reasonable bounds of Gaussian noise. Thereafter, we use our method to determine the time delays between the two images of the quasar Q0957+561 from radio monitoring data at 4 cm and 6 cm, and conclude that if only the observations at epochs common to both wavelengths are used, the time delay estimates are consistent and can be combined to yield 408 ± 12 days. The full 6 cm data set, which covers a longer monitoring period, yields a value that is 10% larger, but this can be attributed to differences in sampling and missing data. Comment: 14 pages, 12 figures; accepted for publication in Astronomy & Astrophysics.
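    The sketch below illustrates one simple kernel-based delay estimator on synthetic, irregularly sampled light curves: fit a Gaussian Nadaraya-Watson regression to image A and pick the trial delay that best predicts image B after shifting its time axis. This is a hedged stand-in for the idea, not the paper's exact estimator; the signal, sampling, and noise settings are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def kernel_regress(t_query, t_train, y_train, bandwidth=15.0):
    """Gaussian Nadaraya-Watson estimate of y at the query times."""
    w = np.exp(-0.5 * ((t_query[:, None] - t_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def signal(t):
    """Smooth underlying variability shared by both images (toy choice)."""
    return np.sin(2 * np.pi * t / 200.0) + 0.3 * np.sin(2 * np.pi * t / 57.0)

# Synthetic light curves of two images with a true delay of 40 days.
true_delay = 40.0
tA = np.sort(rng.uniform(0, 1000, 120))            # irregular sampling epochs
tB = np.sort(rng.uniform(0, 1000, 110))
yA = signal(tA) + 0.05 * rng.normal(size=tA.size)
yB = signal(tB - true_delay) + 0.05 * rng.normal(size=tB.size)

def residual(delay):
    """Mean squared mismatch between image B and the delay-shifted model of A."""
    pred = kernel_regress(tB - delay, tA, yA)
    return np.mean((pred - yB) ** 2)

delays = np.arange(0, 100, 1.0)
best = delays[np.argmin([residual(d) for d in delays])]
print(f"estimated delay ~ {best:.0f} days (true {true_delay:.0f})")
```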

    Echo State Property of Deep Reservoir Computing Networks

    In recent years, the Reservoir Computing (RC) framework has emerged as a state-of-the-art approach for efficient learning in temporal domains. Recently, within the RC context, deep Echo State Network (ESN) models have been proposed. Being composed of a stack of multiple non-linear reservoir layers, deep ESNs potentially allow one to exploit the advantages of a hierarchical temporal feature representation at different levels of abstraction, while preserving the training efficiency typical of the RC methodology. In this paper, we generalize to the case of deep architectures the fundamental RC conditions related to the Echo State Property (ESP), based on the study of stability and contractivity of the resulting dynamical system. Besides providing a necessary condition and a sufficient condition for the ESP of layered RC networks, the results of our analysis also provide insights into the nature of the state dynamics in hierarchically organized recurrent models. In particular, we find that adding layers to a deep reservoir architecture can only drive the regime of the network's dynamics towards (equally or) less stable behaviors. Moreover, our investigation shows an intrinsic differentiation of temporal dynamics across the levels of a deep recurrent architecture, with higher layers in the stack characterized by less contractive dynamics. These theoretical insights are further supported by experimental results that show the effect of layering in terms of a progressively increased short-term memory capacity of the recurrent models.
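    As a rough illustration, and without reproducing the paper's exact necessary and sufficient conditions, the sketch below builds a small stack of reservoir layers and reports two standard stability proxies per layer: the spectral radius of the recurrent matrix (related to necessary ESP conditions) and its largest singular value (related to sufficient, contractivity-based conditions). Layer sizes, scalings, and input weights are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def reservoir(n, spectral_radius):
    """Random recurrent matrix rescaled to a target spectral radius."""
    W = rng.uniform(-1, 1, size=(n, n))
    return W * (spectral_radius / max(abs(np.linalg.eigvals(W))))

# Higher layers given less contractive (larger-radius) dynamics, for illustration.
layers = [reservoir(100, rho) for rho in (0.7, 0.85, 0.95)]

for k, W in enumerate(layers, start=1):
    rho = max(abs(np.linalg.eigvals(W)))
    sigma = np.linalg.norm(W, 2)          # largest singular value
    print(f"layer {k}: spectral radius = {rho:.2f}, largest singular value = {sigma:.2f}")

def deep_esn_step(states, u, layers, w_in_scale=0.5):
    """One forward pass through the stack: layer k is driven by layer k-1's state."""
    new_states, drive = [], u
    for W, x in zip(layers, states):
        W_in = w_in_scale * np.ones((W.shape[0], drive.shape[0]))  # placeholder input weights
        x_new = np.tanh(W @ x + W_in @ drive)
        new_states.append(x_new)
        drive = x_new
    return new_states

states = [np.zeros(100) for _ in layers]
states = deep_esn_step(states, np.array([1.0]), layers)
print("state norms per layer:", [round(float(np.linalg.norm(s)), 2) for s in states])
```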

    Coevolutionary systems and PageRank

    Coevolutionary systems have been used successfully in various problem domains involving situations of strategic decision-making. Central to these systems is a mechanism whereby finite populations of agents compete for reproduction and adapt in response to their interaction outcomes. In competitive settings, agents choose which solutions to implement, and outcomes from their behavioral interactions express preferences between the solutions. Recently, we introduced a framework that provides both qualitative and quantitative characterizations of competitive coevolutionary systems. Its two main features are: (1) a directed graph (digraph) representation that fully captures the underlying structure arising from pairwise preferences over solutions; (2) coevolutionary processes modeled as random walks on the digraph. However, one needs prior, qualitative knowledge of the underlying structures of these coevolutionary digraphs to perform quantitative characterizations of coevolutionary systems and interpret the results. Here, we study a deep connection between coevolutionary systems and PageRank to address this issue. We develop a principled approach to measure and rank the performance (importance) of solutions (vertices) in a given coevolutionary digraph. In the PageRank formalism, B transfers part of its authority to A if A dominates B (there is an arc from B to A in the digraph). In this manner, PageRank authority indicates the importance of a vertex. PageRank authorities with suitable normalization have a natural interpretation as long-term visitation probabilities over the digraph by the coevolutionary random walk. We derive closed-form expressions to calculate PageRank authorities for any coevolutionary digraph, and we can precisely quantify changes to the authorities due to modifications of the restart probability for any coevolutionary system. Our empirical studies demonstrate how PageRank authorities characterize coevolutionary digraphs with different underlying structures.
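    A minimal sketch of the PageRank authorities described above, computed on an invented four-solution dominance digraph with a standard power-iteration solver rather than the paper's closed-form expressions; the digraph and the restart probability are illustrative choices only.

```python
import numpy as np

# adjacency[i, j] = 1 means an arc i -> j, i.e. solution j dominates solution i
adjacency = np.array([
    [0, 1, 1, 0],   # 0 loses to 1 and 2
    [0, 0, 1, 1],   # 1 loses to 2 and 3
    [0, 0, 0, 1],   # 2 loses to 3
    [1, 0, 0, 0],   # 3 loses to 0 (a cycle, as in intransitive games)
], dtype=float)

def pagerank(adjacency, restart=0.15, tol=1e-10):
    """Power iteration for PageRank with uniform restart (teleportation)."""
    n = adjacency.shape[0]
    out_deg = adjacency.sum(axis=1, keepdims=True)
    # dangling vertices (no outgoing arcs) spread their weight uniformly
    P = np.where(out_deg > 0, adjacency / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new = restart / n + (1 - restart) * rank @ P
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

authorities = pagerank(adjacency)
print("PageRank authorities (long-run visitation probabilities):", np.round(authorities, 3))
```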

    Computational modeling with spiking neural networks

    This chapter reviews recent developments in the area of spiking neural networks (SNN) and summarizes the main contributions to this research field. We give background information about the functioning of biological neurons, and discuss the most important mathematical neural models along with neural encoding techniques, learning algorithms, and applications of spiking neurons. As a specific application, the functioning of the evolving spiking neural network (eSNN) classification method is presented in detail, and the principles of numerous eSNN-based applications are highlighted and discussed.
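    As a small, self-contained example of the kind of mathematical neuron model such a review covers, the sketch below simulates a leaky integrate-and-fire neuron driven by a constant input current; parameter values are illustrative only, and this is not the eSNN classification method itself.

```python
import numpy as np

def simulate_lif(I=1.6, tau_m=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 dt=0.1, t_max=200.0):
    """Euler integration of tau_m * dv/dt = -(v - v_rest) + I, with spike and reset."""
    steps = int(t_max / dt)
    v, spikes = v_rest, []
    for k in range(steps):
        v += dt / tau_m * (-(v - v_rest) + I)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(k * dt)
            v = v_reset
    return np.array(spikes)

spike_times = simulate_lif()
print(f"{spike_times.size} spikes, first few at t =", np.round(spike_times[:5], 1), "ms")
```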

    A general framework for a principled hierarchical visualization of multivariate data

    We present a general framework for interactive visualization and analysis of multi-dimensional data points. The proposed model is a hierarchical extension of the latent trait family of models developed in [4] as a generalization of GTM to noise models from the exponential family of distributions. As some members of the exponential family of distributions are suitable for modeling discrete observations, we give a brief example of using our methodology for interactive visualization and semantic discovery in a corpus of text-based documents. We also derive formulas for computing local magnification factors of latent trait projection manifolds.
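    A hedged sketch of the magnification-factor idea mentioned above: for a smooth map f from a 2D latent space into data space, the local magnification factor at a latent point can be computed as sqrt(det(J^T J)), with J the Jacobian of f. The toy embedding below stands in for a trained latent trait projection; it is not the paper's model or its closed-form formulas.

```python
import numpy as np

def f(z):
    """Toy smooth embedding of a 2D latent point into 3D data space."""
    x, y = z
    return np.array([x, y, np.sin(x) * np.cos(y)])

def magnification_factor(f, z, eps=1e-6):
    """Numerical Jacobian J of f at z, then sqrt(det(J^T J))."""
    z = np.asarray(z, dtype=float)
    J = np.column_stack([
        (f(z + eps * e) - f(z - eps * e)) / (2 * eps)   # central differences per latent axis
        for e in np.eye(z.size)
    ])
    return np.sqrt(np.linalg.det(J.T @ J))

for point in [(0.0, 0.0), (1.0, 0.5), (2.0, 2.0)]:
    print(f"latent point {point}: magnification factor = {magnification_factor(f, point):.3f}")
```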